The AI Influx is Here. Time to Establish Your “Department of Know.”

Feb 17 2026

AI has forced healthcare leaders into a precarious position. While many were still debating whether the technology should be used, its value was so clear to others that many teams began using it long before it was formally vetted, governed or secured.

Clinicians have been pasting notes into ChatGPT to save time, operations teams have been testing agent-based chatbots to streamline workflows, and data scientists are integrating SaaS tools into pipelines. The boundless potential and versatility of AI means it has no roadmap—it simply arrived one day in a browser tab.

It’s this tension between organic experimentation and disciplined governance that sat at the heart of a recent Health System CIO webinar, Cyber Strategies for Securing the AI Influx.

What we sketched out over the course of the conversation was a path from “AI everywhere, unmanaged” to “AI-enabled, with purpose.” In this article, I’ll share insights and advice from the webinar on how health systems can regain control and make AI a strategic asset rather than a security hazard.

From “because it’s there” to “because it’s right”

The first trap that most organizations—healthcare or otherwise—fall into is adopting AI just because it’s available.

The panel likened it to the early days of telemedicine during COVID, when tools were going live very quickly but governance, integration and long-term risk were often afterthoughts. This is the scenario every CISO fears: confidential data drifts into AI tools and begins influencing clinical or operational decisions without proper checks. As a result, you’re not only putting Protected Health Information (PHI) at risk, but also everything from staffing models to strategic plans and, most critically, uptime and patient safety.

So before you switch an AI tool on, ask where its outputs will show up in workflows and what happens if it’s wrong. Even well-meaning automated actions, such as AI-assisted account lockouts, can’t be allowed to derail something as time-sensitive as surgery or push clinicians into unsafe workarounds.

It’s not about extremes either. When you “block everything,” you kill innovation; when you “allow everything,” you kill your ability to control. So, aim for the middle: adaptive policies, enterprise-controlled AI and clear guardrails so you can protect data and still move fast.

From the “Department of No” to the “Department of Know”

If AI is forcing anything to evolve, it’s governance culture.

The old model most of us recognize is to view security as the “department of no”—the place where requests go in and are delayed long enough that people eventually stop asking. That doesn’t work in healthcare, where clinical teams will push forward regardless and the stakes are much higher than a delayed product launch.

Instead of being known for blocking, imagine security as the place where people go for understanding: the “department of know(ledge)”, where security teams know where data lives, how it flows between systems, which AI tools are touching it and on what terms.

That calls for a model where security owns around a third of risk decisions, with compliance and data governance sharing the rest between them. A “33 percent rule” like this ensures no single business function is judge, jury and executioner. It moves the organization away from a “the CISO will solve it” mentality and toward a genuinely shared view of AI risk.

The conversation then shifts. Less “can we do this?” and more “what needs to be true for us to do this safely?”

“Security shouldn’t be dictating all components, doing the blocking, setting the strategy, doing everything around that. You need to have other stakeholders at the table.”
— Steven Ramirez, VP & Chief Information Security & Technology Officer (CISTO), Renown Health

You can’t secure what you pretend not to see

After years of the focus being on digital transformation, AI is pushing healthcare into a new phase: data transformation and governance. AI is only as strong as the data underneath it, making hygiene, ownership and classification non-negotiable.

And while many organizations still avoid full data discovery, remember that AI will learn from whatever it can access. So, ignoring pockets of exposure doesn’t make you safer; it just ensures you discover problems later, when they’re harder to contain.

Protecting AI starts with protecting data; you need to know where sensitive information is at rest, in motion and in use, and apply consistent controls across all instances. That requires real partnership between security and data teams, because just as you can’t protect what you don’t understand, data leaders can’t drive AI safely without getting a grip on risk, compliance and privacy.
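As a toy illustration of what “know where sensitive information is” can mean in practice, the sketch below scans a directory of text files for PHI-style patterns. The regexes and file layout are illustrative assumptions only; real discovery tooling relies on far richer detection (dictionaries, validated identifiers, ML classifiers) than a handful of patterns.

```python
import re
from pathlib import Path

# Illustrative PHI-style patterns (assumptions, not a complete taxonomy).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_text(text: str) -> set[str]:
    """Return the names of the sensitive patterns found in a piece of text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def scan_directory(root: str) -> dict[str, set[str]]:
    """Map each .txt file under `root` to the sensitive patterns it contains."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        hits = classify_text(path.read_text(errors="ignore"))
        if hits:
            findings[str(path)] = hits
    return findings
```

Even a crude inventory like this forces the conversation the article describes: once you can list which stores hold which classes of data, security and data owners can decide together which AI tools are allowed to touch them.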

Tabletop exercise sessions that simulate AI security incidents in real time can help improve both risk awareness and data quality. But more broadly, it’s about moving from gatekeeping to enabling, with security providing data owners with the visibility and guardrails to make informed decisions about how AI interacts with their data. 

Partnering with users beats blocking them every time, because the people closest to the workflow know where the edge cases are and they’ll surface issues early if they see security as a collaborator rather than an obstacle.

“I think AI is kind of forcing our hand, as we did with the digital transformation, to more of a data transformation, making sure we have more governance and ownership in those respective areas.”
— Steven Ramirez, VP & Chief Information Security & Technology Officer (CISTO), Renown Health

AI as a threat multiplier, and a defensive ally

AI isn’t just changing how healthcare delivers care, but also how attackers operate, with AI-driven spear-phishing already outperforming traditional campaigns, and nation-state actors layering AI across the kill chain, from recon to payload.

So, security teams have to respond in kind, using AI defensively to watch behavior, learn what “normal” looks like and spot subtle anomalies in near real time. Instead of relying on disconnected toolsets, you need an integrated digital ecosystem: platforms that talk to each other seamlessly, support unified policies and anchor AI inside your core EHR.
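To make “learn what normal looks like” concrete, here is a deliberately minimal sketch: a z-score check of one user’s activity against their own history. It assumes a simple numeric baseline (say, daily megabytes uploaded to AI tools) and stands in for the much richer behavioral models that production UEBA and SSE platforms actually apply.

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates sharply from a user's own history.

    `baseline` might be a user's daily MB uploaded to AI tools over the
    past month; `observed` is today's figure. A plain z-score is an
    illustrative stand-in for real behavioral analytics.
    """
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu  # flat history: any change is notable
    return abs(observed - mu) / sigma > z_threshold
```

The design point is per-user baselines: a volume that is routine for a data scientist may be a red flag for a scheduling clerk, which is why global thresholds alone miss the “subtle anomalies” the panel described.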

And as governance tightens to match the urgency of the times, oversight is moving from quarterly to monthly or even weekly, as AI pilots accelerate, with the goal of evaluating new tools quickly without abandoning consistency.

The webinar ended with the panel offering four simple moves you can follow: 

  1. Don’t just patch; think about compensating controls: Ask what stands between you and a serious incident if a new AI-enabled attack path is exploited tomorrow.
  2. Be a storyteller and partner, not a hall monitor: To keep your seat at the table, you have to help the business see how secure AI can actually move it forward.
  3. Get a handle on shadow and SaaS AI sprawl: Know what tools people are using, how data flows through them and where AI functionality has been introduced, and put controls in place before, not after, a data loss event.
  4. Shift from reactive to proactive: Enable the business, maintain oversight and treat AI as an ongoing, collaborative program, not a one-off project.
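As a small illustration of move 3, the sketch below tallies which users are reaching which AI services from proxy-style logs. The `user domain` line format and the hardcoded domain set are assumptions for the example; a real inventory would come from your secure web gateway’s maintained AI-app category feed, not a static list.

```python
from collections import Counter

# Hypothetical sample of generative-AI domains (assumption for the sketch).
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def shadow_ai_usage(log_lines: list[str]) -> Counter:
    """Tally requests to AI services from simple `user domain` log lines."""
    usage = Counter()
    for line in log_lines:
        try:
            user, domain = line.split()
        except ValueError:
            continue  # skip malformed lines
        if domain in AI_DOMAINS:
            usage[(user, domain)] += 1
    return usage
```

Even this level of visibility changes the conversation: instead of guessing at shadow AI, you can go to a department with specifics about which tools its people already rely on and work out guardrails before a data loss event, not after.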

Ultimately, evolving from the “Department of No” to the “Department of Know” is a shift that will enable you to turn the AI influx from a source of chaos into a capability you can steer toward safer care, tighter control and smarter innovation.

“Good data governance and good data protection strategies help in the cleanliness of data, making sure we have good data sources that we’re tracking. And it really makes sure that everybody’s doing their homework on making sure that the data stewards and data owners are really attributed to that.”
— Steven Ramirez, VP & Chief Information Security & Technology Officer (CISTO), Renown Health

Watch the on-demand webinar to hear how top cyber leaders are moving beyond “no” to become true innovation partners. Learn to enforce zero trust protection and gain the real-time visibility needed to secure the future of care.

Going to HIMSS 2026? Get ready to see the future of clinical security and grab a front-row seat for a live demo at the Netskope Booth #10107. Keep up with everything Netskope has going on at HIMSS here.

Damian Chung
Damian Chung is a cybersecurity leader with over ten years of security experience in healthcare. Damian is responsible for corporate security tools and processes.